Accelerating Large-Scale Data Analysis by Offloading to High-Performance Computing Libraries using Alchemist
Apache Spark is a popular system aimed at the analysis of large data sets,
but recent studies have shown that certain computations---in particular, many
linear algebra computations that are the basis for solving common machine
learning problems---are significantly slower in Spark than when done using
libraries written in a high-performance computing framework such as the
Message-Passing Interface (MPI).
To remedy this, we introduce Alchemist, a system designed to call MPI-based
libraries from Apache Spark. Using Alchemist with Spark helps accelerate linear
algebra, machine learning, and related computations, while still retaining the
benefits of working within the Spark environment. We discuss the motivation
behind the development of Alchemist, and we provide a brief overview of its
design and implementation.
We also compare the performance of pure Spark implementations with that of
Spark implementations that leverage MPI-based codes via Alchemist. To do so, we
use two data science case studies: a large-scale application of the conjugate
gradient method to solve very large linear systems arising in a speech
classification problem, where we see an improvement of an order of magnitude;
and the truncated singular value decomposition (SVD) of a 400GB
three-dimensional ocean temperature data set, where we see a speedup of up to
7.9x. We also illustrate that the truncated SVD computation is easily scalable
to terabyte-sized data by applying it to data sets of sizes up to 17.6TB.
Comment: Accepted for publication in Proceedings of the 24th ACM SIGKDD
International Conference on Knowledge Discovery and Data Mining, London, UK,
201
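As an illustration of the kind of computation the first case study offloads, here is a minimal single-node NumPy sketch of the conjugate gradient iteration for a symmetric positive-definite system. The function name and interface are illustrative only; the paper's actual solver runs through Alchemist on an MPI-based library at far larger scale.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A (dense, in-memory sketch)."""
    x = np.zeros_like(b)
    r = b - A @ x            # initial residual
    p = r.copy()             # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # converged: residual norm below tolerance
            break
        p = r + (rs_new / rs_old) * p   # new conjugate search direction
        rs_old = rs_new
    return x
```

In exact arithmetic CG converges in at most n iterations for an n-by-n system, which is what makes it attractive for the very large linear systems mentioned above.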
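The second case study computes a truncated SVD. A minimal dense NumPy sketch of the rank-k truncation (keeping only the k largest singular triplets) is shown below; a 400GB or multi-terabyte data set would of course require the distributed MPI-based implementation the paper offloads to, not this in-memory version.

```python
import numpy as np

def truncated_svd(A, k):
    """Rank-k truncated SVD: keep the k largest singular triplets of A."""
    # numpy.linalg.svd returns singular values in descending order,
    # so truncation is just slicing off the first k triplets.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]
```

The rank-k reconstruction `U * s @ Vt` is the best rank-k approximation of A in both the spectral and Frobenius norms (Eckart-Young), which is why the truncated SVD is the standard low-rank summary for data sets like the ocean temperature example above.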
The discrete adjoint method for high-order time-stepping methods
This thesis examines the derivation and implementation of the discrete adjoint method for several time-stepping methods. Our results are important for gradient-based numerical optimization in the context of large-scale model calibration problems that are constrained by nonlinear time-dependent PDEs. To this end, we discuss finding the gradient and the action of the Hessian of the data misfit function with respect to three sets of parameters: model parameters, source parameters, and the initial condition. We also discuss the closely related topic of computing the action of the sensitivity matrix on a vector, which is required when performing a sensitivity analysis. The gradient and Hessian of the data misfit function with respect to these parameters require the derivatives of the misfit with respect to the simulated data, and we give the procedures for computing these derivatives for several data misfit functions that are of use in seismic imaging and elsewhere.

The methods we consider can be divided into two categories, linear multistep (LM) methods and Runge-Kutta (RK) methods, and several variants of these are discussed. Regular LM and RK methods can be used for ODE systems arising from the semi-discretization of general nonlinear time-dependent PDEs, whereas implicit-explicit and staggered variants can be applied when the PDE has a more specialized form. Exponential time-differencing RK methods are also discussed.

The implementation of the associated adjoint time-stepping methods is discussed in detail. Our motivation is the application of the discrete adjoint method to high-order time-stepping methods, but the approach taken here does not exclude lower-order methods. All of the algorithms have been implemented in MATLAB using an object-oriented design and are written with extensibility in mind.
For exponential RK methods, we illustrate numerically that the adjoint methods have the same order of accuracy as their corresponding forward methods, and for linear PDEs we give a simple proof that this must always be the case. The applicability of some of the methods developed here to pattern formation problems is demonstrated using the Swift-Hohenberg model.
(Faculty of Science, Department of Mathematics)
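As a toy illustration of the discrete adjoint idea the thesis develops, the sketch below uses forward Euler (rather than the high-order LM/RK methods the thesis targets) to compute the gradient of a data misfit with respect to a single model parameter by stepping the adjoint variable backward in time. The model ODE u' = -m u, the misfit J(m) = ½(u_N - d)², and all names are illustrative assumptions, not the thesis's code.

```python
import numpy as np

def forward(m, u0, h, N):
    """Forward Euler time-stepping for the scalar model ODE u' = -m * u."""
    u = [u0]
    for _ in range(N):
        u.append(u[-1] + h * (-m * u[-1]))   # u_{n+1} = (1 - h*m) * u_n
    return u

def adjoint_gradient(m, u0, h, N, d):
    """Gradient of J(m) = 0.5 * (u_N - d)^2 via the discrete adjoint method."""
    u = forward(m, u0, h, N)
    lam = u[-1] - d                  # terminal adjoint: dJ/du_N
    g = 0.0
    for n in range(N - 1, -1, -1):
        g += lam * (-h * u[n])       # explicit m-dependence of step n: d u_{n+1}/dm = -h * u_n
        lam = lam * (1.0 - h * m)    # adjoint (transposed) of the forward step
    return g
```

The key property, which the thesis establishes for high-order methods, is that the adjoint recursion yields the exact gradient of the *discrete* misfit, so it agrees with a finite-difference check to roundoff rather than only to discretization order.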